Robot Localization Using Polygon Distances
Authors: Oliver Karch et al.
In: Christensen et al. (Eds.): Sensor Based Intelligent Robots, LNAI 1724, pp. 200–219. Springer-Verlag, Berlin Heidelberg, 1999.
Abstract
We present an approach to the localization problem, for which polygon distances play an important role. In our setting of this problem the robot is only equipped with a map of its environment, a range sensor, and possibly a compass. To solve this problem, we first study an idealized version of it, where all data is exact and where the robot has a compass. This leads to the purely geometrical problem of fitting a visibility polygon into the map, which was solved very efficiently by Guibas, Motwani, and Raghavan. Unfortunately, their method is not applicable in realistic cases, where all the data is noisy. To overcome these problems we introduce a distance function, the polar coordinate metric, that models the resemblance between a range scan and the structures of the original method. We show some important properties of the polar coordinate metric and how we can compute it efficiently. Finally, we show how this metric is used in our approach and in our experimental Robot Localization Program RoLoPro.

This research is supported by the Deutsche Forschungsgemeinschaft (DFG) under project numbers No 88/14-1 and No 88/14-2.

1 The Localization Problem

We investigate the first stage of the robot localization problem [3,12]: an autonomous robot is at an unknown position in an indoor environment, for example a factory building, and has to do a complete relocalization, that is, determine its position and orientation. The application we have in mind here is a wake-up situation (e.g., after a power failure or maintenance work), where the robot is placed somewhere in its environment, powered on, and then "wants" to know where it is located. Note that we do not assume any knowledge about previous configurations of the robot (before its shutdown), because the robot might have been moved in the meantime.

In order to perform this task, the robot has a polygonal map of its environment and a range sensor (e.g., a laser radar), which provides the robot with a set of range measurements (usually taken at equidistant angles). The localization should be performed using only this minimal equipment. In particular, the robot is not allowed to use landmarks (e.g., marks on the walls or on the floor). This should make it possible to use autonomous robots also in fields of application where it is not allowed or too expensive to change the environment.

The localization process usually consists of two stages. First, the non-moving robot enumerates all hypothetical positions that are consistent with its sensor data, i.e., that yield the same visibility polygon. There can very well be several such positions if the map contains identical parts at different places (e.g., buildings with many identical corridors, like hospitals or libraries). All those positions cannot be distinguished by a non-moving robot. Figure 1 shows an example: the marked positions at the bottom of the two outermost niches cannot be distinguished using only their visibility polygons.

Fig. 1. Polygonal map and its decomposition into visibility cells

If there is more than one hypothetical position, the robot eliminates the wrong hypotheses in the second stage and determines exactly where it is by traveling around in its environment. This is a typical on-line problem, because the robot has to consider the new information that arrives while it is exploring its environment.
Its task is to find a path as efficient (i.e., as short) as possible for eliminating the wrong hypotheses. Dudek et al. [4] have already shown that finding an optimal localization strategy is NP-hard, and described a competitive greedy strategy, the running time of which was recently improved by Schuierer [10].

This paper concentrates on the first stage of the localization process, that is, on generating the possible robot configurations (i.e., positions and orientations), although in our current work we also want to give solutions for the second stage that can be applied in practice. With the additional assumption that the robot already knows its orientation (i.e., the robot has a compass) and that all sensors and the map are exact (i.e., without any noise), this problem turns into a purely geometric one, stated as follows: for a given map polygon M and a star-shaped polygon V (the visibility polygon of the robot), find all points p ∈ M that have V as their visibility polygon. Guibas et al. [5] described a scheme for solving this idealized version of the localization problem efficiently, and we briefly sketch their method in the following section. As this more theoretical method requires exact sensors and an exact map, it is not directly applicable in practice, where the data is normally noisy. In Sect. 3 we consider these problems and show in Sections 4 and 5 an approach to avoiding them, which uses distance functions to model the resemblance between the noisy range scans (from the sensor) and the structures of the original method (extracted from the possibly inexact map).

2 Solving the Geometric Problem

In the following we sketch the method of Guibas et al., which is the basis for our approach described in Sections 4 and 5. We assume that the robot navigates on a planar surface with mostly vertical walls and obstacles, such that the environment can be described by a polygon M, called the map polygon. Additionally, we assume that M has no holes (i.e., there are no free-standing obstacles in the environment), although the algorithm remains the same for map polygons with holes; the preprocessing costs, however, may be higher in that case. The (exact) range sensor generates the star-shaped visibility polygon V of the robot. As the range sensor is only able to measure distances relative to its own position, we assume the origin of the coordinate system of V to be the position of the robot. Using the assumption that we have a compass, the geometric problem is then to find all points p ∈ M such that their visibility polygon V_p is identical with the visibility polygon V of the robot.

The main idea of Guibas et al. [5] for solving this problem is to divide the map into finitely many visibility cells such that a certain structure (the visibility skeleton, which is closely related to the visibility polygon) does not change inside a cell. For a localization query we then do not search for points where the visibility polygon fits into the map, but instead for points where the corresponding skeleton does. That is, the continuous¹ problem of fitting a visibility polygon into the map is discretized in a natural way by decomposing the map into visibility cells.

¹ "Continuous" in the sense that we cannot find an ε > 0 such that the visibility polygon V_p does not change when the point p moves by at most ε.
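To make the idealized problem statement concrete before describing the cell decomposition, the following sketch simulates an exact range sensor by casting rays at equidistant angles and tests candidate points by brute force. This is only an illustration of the problem, not the method of Guibas et al. (which avoids exactly this exhaustive search); the function names, the candidate set, and the tolerance are our own assumptions, and candidate points are assumed to lie inside the map polygon M.

    import math

    def ray_segment_distance(px, py, dx, dy, ax, ay, bx, by):
        """Distance along the ray (px, py) + t*(dx, dy) to segment a-b, or None if missed."""
        ex, ey = bx - ax, by - ay
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:                      # ray parallel to the segment
            return None
        # Solve p + t*d = a + s*e for the ray parameter t and the segment parameter s.
        t = ((ax - px) * ey - (ay - py) * ex) / denom
        s = ((ax - px) * dy - (ay - py) * dx) / denom
        return t if (t >= 0.0 and 0.0 <= s <= 1.0) else None

    def simulate_scan(map_polygon, p, num_rays=360):
        """Exact range scan at point p: distances to the map boundary at equidistant angles."""
        scan = []
        n = len(map_polygon)
        for k in range(num_rays):
            phi = 2.0 * math.pi * k / num_rays
            dx, dy = math.cos(phi), math.sin(phi)
            hits = []
            for i in range(n):
                ax, ay = map_polygon[i]
                bx, by = map_polygon[(i + 1) % n]
                t = ray_segment_distance(p[0], p[1], dx, dy, ax, ay, bx, by)
                if t is not None:
                    hits.append(t)
            scan.append(min(hits))                  # nearest wall in direction phi
        return scan

    def localize_brute_force(map_polygon, robot_scan, candidates, tol=1e-6):
        """Return all candidate points whose simulated scan matches the robot's scan."""
        hypotheses = []
        for p in candidates:
            scan = simulate_scan(map_polygon, p, num_rays=len(robot_scan))
            if max(abs(a - b) for a, b in zip(scan, robot_scan)) <= tol:
                hypotheses.append(p)
        return hypotheses

The cell decomposition described next replaces this exhaustive, per-point comparison: the continuous family of visibility polygons is reduced to finitely many skeletons, one per cell.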
2.1 Decomposing the Map into Cells

At preprocessing time the map M is divided into convex visibility cells by introducing straight lines that form the cell boundaries, such that the following property holds: the visibility skeleton is the same for all points inside a cell.
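The abstract introduces the polar coordinate metric as a distance that models the resemblance between a (possibly noisy) range scan and the structures extracted from the map. As a rough illustration of that idea (not the paper's exact definition), the sketch below compares two shapes through their radius-over-angle samplings with an L2-type sum and, for the case without a compass, minimizes over all cyclic rotations of one scan; the function names and the sampling scheme are assumptions.

    import math

    def polar_scan_distance(scan_a, scan_b):
        """L2-type distance between two radius-over-angle samplings of equal length."""
        assert len(scan_a) == len(scan_b)
        # Weight by the angular step so the sum approximates an integral over [0, 2*pi).
        dphi = 2.0 * math.pi / len(scan_a)
        return math.sqrt(dphi * sum((a - b) ** 2 for a, b in zip(scan_a, scan_b)))

    def polar_scan_distance_no_compass(scan_a, scan_b):
        """Unknown orientation: minimize the distance over all cyclic shifts of one scan."""
        n = len(scan_b)
        return min(polar_scan_distance(scan_a, scan_b[k:] + scan_b[:k]) for k in range(n))

A noise-tolerant localization query could then rank the precomputed cell structures (or the scans of representative points) by such a distance to the robot's scan and keep the best-matching cells as position hypotheses, instead of demanding the exact congruence required in the idealized setting.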